
    The Illogicality of Stock-Brokers: Psychological Experiments on the Effects of Prior Knowledge and Belief Biases on Logical Reasoning in Stock Trading

    BACKGROUND: Explanations for the current worldwide financial crisis are primarily provided by economists and politicians. In the present work, however, we focus on the psychological-cognitive factors that most likely affect the thinking of people on the economic stage and thus might also have had an effect on the progression of the crisis. One of these factors might be the effect of prior beliefs on reasoning and decision-making. So far, this question has been explored only to a limited extent. METHODS: We report two experiments on the logical reasoning competence of nineteen stock-brokers with long-standing vocational experience at the stock market. The premises of the reasoning problems concerned stock trading, and the experiments varied whether or not their conclusions (the proposition reached after considering the premises) agreed with the brokers' prior beliefs. Half of the problems had a conclusion that was highly plausible for stock-brokers, while the other half had a highly implausible conclusion. RESULTS: The data show a strong belief bias: stock-brokers were strongly influenced by their prior knowledge. Performance was lowest for inferences in which the problem created a conflict between logical validity and the experts' beliefs, for example a logically valid argument whose conclusion contradicts market experience. In these cases, the stock-brokers tended to make logically invalid inferences rather than give up their existing beliefs. CONCLUSIONS: Our findings support the thesis that cognitive factors affect decision-making on the financial market. In the present study, stock-brokers were guided more by past experience and existing beliefs than by logical thinking and rational decision-making. They had difficulty disengaging from deeply anchored thinking patterns. However, we believe it is wrong to blame the brokers for these "malfunctions", because such hard-wired cognitive principles are difficult to suppress even when the person is aware of them.

    Interpretation of evidence in data by untrained medical students: a scenario-based study

    Background: To determine which approach to the assessment of evidence in data (statistical tests or likelihood ratios) comes closest to the interpretation of evidence by untrained medical students. Methods: Empirical study of medical students (N = 842), untrained in statistical inference or in the interpretation of diagnostic tests. They were asked to interpret a hypothetical diagnostic test, presented in four versions that differed in the distributions of test scores in diseased and non-diseased populations. Each student received only one version. The intuitive application of the statistical-test approach would lead to rejecting the null hypothesis of no disease in version A, and to accepting the null in version B. Application of the likelihood ratio approach led to the opposite conclusions: against the disease in A, and in favour of disease in B. Version C tested the importance of the p-value (A: 0.04 versus C: 0.08) and version D the importance of the likelihood ratio (C: 1/4 versus D: 1/8). Results: In version A, 7.5% concluded that the result was in favour of disease (compatible with the p-value), 43.6% ruled against the disease (compatible with the likelihood ratio), and 48.9% were undecided. In version B, 69.0% were in favour of disease (compatible with the likelihood ratio), 4.5% against (compatible with the p-value), and 26.5% undecided. Increasing the p-value from 0.04 to 0.08 did not change the results. The change in the likelihood ratio from 1/4 to 1/8 increased the proportion of non-committed responses. Conclusions: Most untrained medical students appear to interpret evidence from data in a manner that is compatible with the use of likelihood ratios.
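
The contrast between the two inferential approaches can be sketched numerically. The conditional probabilities below are invented assumptions, chosen only so that the likelihood ratio comes out at the 1/4 of version C; they are not the study's data.

```python
# Two ways to read the same hypothetical test score.

def likelihood_ratio(p_result_given_disease, p_result_given_no_disease):
    """LR = P(result | disease) / P(result | no disease).
    LR > 1 is evidence for disease; LR < 1 is evidence against it."""
    return p_result_given_disease / p_result_given_no_disease

# Assumed densities: the observed score is four times more likely
# in the non-diseased population, giving LR = 1/4 (as in version C).
lr_c = likelihood_ratio(0.02, 0.08)   # 0.25 -> evidence against disease

# The statistical-test approach instead asks only how improbable the
# score is under the null (no disease), ignoring its probability
# under disease; a small p-value then 'rejects' no disease even
# though the likelihood ratio points the other way.
p_value = 0.04

print(lr_c, lr_c < 1, p_value < 0.05)
```

The point of the sketch is that the two approaches can disagree on the same data: the p-value criterion rejects "no disease" while the likelihood ratio favours it.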

    Studying strategies and types of players: Experiments, logics and cognitive models

    How do people reason about their opponent in turn-taking games? Often, people do not make the decisions that game theory would prescribe. We present a logic that can play a key role in understanding how people make their decisions, by delineating all plausible reasoning strategies in a systematic manner. This in turn makes it possible to construct a corresponding set of computational models in a cognitive architecture. These models can be run and fitted to the participants’ data in terms of decisions, response times, and answers to questions. We validate these claims on the basis of an earlier game-theoretic experiment on the turn-taking game “Marble Drop with Surprising Opponent”, in which the opponent often starts with a seemingly irrational move. We explore two ways of segregating the participants into reasonable “player types”. The first is based on latent class analysis, which divides the players into three classes according to their first decisions in the game: Random players, Learners, and Expected players, who make decisions consistent with forward induction. The second is based on participants’ answers to a question about their opponent, classified according to levels of theory of mind: zero-order, first-order, and second-order. It turns out that higher player types and higher levels of theory of mind both correspond to greater success, as measured by monetary awards, and to longer decision times. Next, we use the logical language to express different kinds of strategies that people apply when reasoning about their opponent and making decisions in turn-taking games, as well as the ‘reasoning types’ reflected in their behavior. Then, we translate the logical formulas into computational cognitive models in the PRIMs architecture. Finally, we run two of the resulting models: one corresponding to the strategy of only being interested in one’s own payoff, and one to the myopic strategy, in which one can only look ahead a limited number of nodes. It turns out that the participant data fit the own-payoff strategy, not the myopic one. The article closes the circle from experiments via logic and cognitive modelling back to predictions about new experiments.
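
The game-theoretic benchmark that participants deviate from can be illustrated with a minimal backward-induction sketch over a small two-player turn-taking game. The tree structure and payoffs below are invented for illustration; they are not the Marble Drop payoffs.

```python
# Backward induction over a tiny turn-taking game tree.
# A node is either ('leaf', (pay0, pay1)) or
# ('node', player, stop_child, continue_child), where player is 0 or 1.

def backward_induction(node):
    """Return the payoff pair reached under game-theoretic play,
    assuming each player maximises their own payoff at their node."""
    if node[0] == 'leaf':
        return node[1]
    _, player, stop, cont = node
    pay_stop = backward_induction(stop)
    pay_cont = backward_induction(cont)
    # the player to move picks the branch maximising their own payoff
    return pay_stop if pay_stop[player] >= pay_cont[player] else pay_cont

# Invented example: player 0 moves first, then player 1.
game = ('node', 0,
        ('leaf', (1, 0)),              # player 0 stops immediately
        ('node', 1,
         ('leaf', (0, 2)),             # player 1 stops
         ('leaf', (3, 1))))            # both continue

print(backward_induction(game))        # (1, 0): player 0 stops at once
```

At the inner node player 1 prefers stopping (payoff 2 over 1), so player 0, anticipating this, stops immediately; a human opponent who nevertheless continues is exactly the "seemingly irrational move" the experiment exploits.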

    The role of expertise in dynamic risk assessment: A reflection of the problem-solving strategies used by experienced fireground commanders

    Although the concept of dynamic risk assessment has in recent times become more topical in the training manuals of most high-risk domains, only a few empirical studies have reported how experts actually carry out this crucial task. The knowledge gap between research and practice in this area therefore calls for more empirical investigation within the naturalistic environment. In this paper, we present and discuss the problem-solving strategies employed by sixteen experienced operational firefighters, elicited using a qualitative knowledge elicitation tool, the critical decision method. Findings revealed that dynamic risk assessment is not merely a process of weighing the risks of a proposed course of action against its benefits, but rather an experiential and pattern-recognition process. The paper concludes by discussing the implications of designing training curricula for less experienced officers using the elicited expert knowledge.

    Big data for bipolar disorder


    Cognitive Architectures as Scaffolding for Risky Choice Models

    Debates in decision making, such as the debate about the empirical validity of the priority heuristic, a model of risky choice, are sometimes difficult to resolve, because hypotheses about decision processes are either formulated qualitatively or not precisely enough. This lack of precision often leaves empirical tests with response times and other detailed behavioral data inconclusive. One way to increase the precision of decision models is to implement them in broad cognitive frameworks such as the cognitive architecture ACT-R. ACT-R can be used to construct detailed process models of how people make, for example, risky choices, and to derive process predictions about, among others, eye movements, absolute response times, or brain activation. These precise process models make their underlying assumptions explicit, which facilitates direct model comparisons and makes the models amenable to strict empirical tests. We demonstrate the level of detail that ACT-R provides with an ACT-R implementation of the inferential heuristic take-the-best. We end by addressing the question of why cognitive architectures are still not widespread in judgment and decision making.
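
The decision rule of the take-the-best heuristic mentioned above can be sketched independently of its ACT-R implementation: compare two options on binary cues ordered by validity and decide on the first cue that discriminates. The cue names, values, and validity order below are invented for illustration.

```python
# A minimal sketch of the take-the-best decision rule.

def take_the_best(option_a, option_b, cues_by_validity):
    """Return 'a' or 'b' for the option favoured by the first
    discriminating cue, or 'guess' if no cue discriminates."""
    for cue in cues_by_validity:
        a, b = option_a[cue], option_b[cue]
        if a != b:                     # first discriminating cue decides
            return 'a' if a > b else 'b'
    return 'guess'                     # fall back to guessing

# Invented city-comparison cues (1 = cue present, 0 = absent),
# ordered from most to least valid:
city_a = {'capital': 1, 'airport': 0, 'team': 1}
city_b = {'capital': 1, 'airport': 1, 'team': 0}

print(take_the_best(city_a, city_b, ['capital', 'airport', 'team']))
```

Here 'capital' does not discriminate, so the search moves to 'airport', which favours city_b; the remaining cue is never consulted, which is what makes the heuristic "fast and frugal" and gives an architecture like ACT-R concrete retrieval steps to time.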